#AI problem-solving techniques
jcmarchi · 5 months ago
Text
DeepMind’s Mind Evolution: Empowering Large Language Models for Real-World Problem Solving
New Post has been published on https://thedigitalinsider.com/deepminds-mind-evolution-empowering-large-language-models-for-real-world-problem-solving/
In recent years, artificial intelligence (AI) has emerged as a practical tool for driving innovation across industries. At the forefront of this progress are large language models (LLMs) known for their ability to understand and generate human language. While LLMs perform well at tasks like conversational AI and content creation, they often struggle with complex real-world challenges requiring structured reasoning and planning.
For instance, if you ask LLMs to plan a multi-city business trip that involves coordinating flight schedules, meeting times, budget constraints, and adequate rest, they can provide suggestions for individual aspects. However, they often face challenges in integrating these aspects to effectively balance competing priorities. This limitation becomes even more apparent as LLMs are increasingly used to build AI agents capable of solving real-world problems autonomously.
Google DeepMind has recently developed a solution to address this problem. Inspired by natural selection, this approach, known as Mind Evolution, refines problem-solving strategies through iterative adaptation. By guiding LLMs in real-time, it allows them to tackle complex real-world tasks effectively and adapt to dynamic scenarios. In this article, we’ll explore how this innovative method works, its potential applications, and what it means for the future of AI-driven problem-solving.
Why LLMs Struggle With Complex Reasoning and Planning
LLMs are trained to predict the next word in a sentence by analyzing patterns in large text datasets, such as books, articles, and online content. This allows them to generate responses that appear logical and contextually appropriate. However, this training is based on recognizing patterns rather than understanding meaning. As a result, LLMs can produce fluent text yet struggle with tasks that require deeper reasoning or structured planning.
The core limitation lies in how LLMs process information. They focus on probabilities or patterns rather than logic, which means they can handle isolated tasks—like suggesting flight options or hotel recommendations—but fail when these tasks need to be integrated into a cohesive plan. This also makes it difficult for them to maintain context over time. Complex tasks often require keeping track of previous decisions and adapting as new information arises. LLMs, however, tend to lose focus in extended interactions, leading to fragmented or inconsistent outputs.
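To see the difference between pattern matching and reasoning, consider a deliberately minimal sketch: a bigram model that "predicts the next word" purely from co-occurrence counts. The corpus and code below are illustrative toys, not a description of how any production LLM is built.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram model that "predicts the next word" purely
# from co-occurrence statistics in its training text, with no notion of
# what the words mean or how the pieces fit into a coherent plan.
corpus = "book a flight then book a hotel then book a meeting".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word` in the training data.
    return counts[word].most_common(1)[0][0]

print(predict("book"))  # "a" — a statistically likely continuation
print(predict("then"))  # "book" — again, frequency, not intent
```

The model happily continues any prompt it has statistics for, but nothing in it tracks constraints across a whole itinerary — which is the gap Mind Evolution targets.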
How Mind Evolution Works
DeepMind’s Mind Evolution addresses these shortcomings by adopting principles from natural evolution. Instead of producing a single response to a complex query, this approach generates multiple potential solutions, iteratively refines them, and selects the best outcome through a structured evaluation process. For instance, consider a team brainstorming ideas for a project. Some ideas are great, others less so. The team evaluates all ideas, keeping the best and discarding the rest. They then improve the best ideas, introduce new variations, and repeat the process until they arrive at the best solution. Mind Evolution applies this principle to LLMs.
Here’s a breakdown of how it works:
Generation: The process begins with the LLM creating multiple responses to a given problem. For example, in a travel-planning task, the model may draft various itineraries based on budget, time, and user preferences.
Evaluation: Each solution is assessed against a fitness function, a measure of how well it satisfies the task's requirements. Low-quality responses are discarded, while the most promising candidates advance to the next stage.
Refinement: A unique innovation of Mind Evolution is the dialogue between two personas within the LLM: the Author and the Critic. The Author proposes solutions, while the Critic identifies flaws and offers feedback. This structured dialogue mirrors how humans refine ideas through critique and revision. For example, if the Author suggests a travel plan that includes a restaurant visit exceeding the budget, the Critic points this out. The Author then revises the plan to address the Critic's concerns. This process enables LLMs to perform deeper analysis than they could with earlier prompting techniques.
Iterative Optimization: The refined solutions undergo further evaluation and recombination to produce stronger candidates for the next cycle.
By repeating this cycle, Mind Evolution iteratively improves the quality of solutions, enabling LLMs to address complex challenges more effectively.
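The cycle above can be sketched in a few lines of code. Everything here is a stand-in: the LLM calls are stubbed with random numbers, and `fitness`, `generate`, and `critique_and_revise` are illustrative placeholders for model sampling and a task-specific scorer, not DeepMind's actual implementation.

```python
import random

def fitness(plan):
    # Toy scorer standing in for the task's fitness function:
    # prefer plans closer to a target "quality" of 10.
    return -abs(10 - plan)

def generate(n):
    # Stand-in for sampling n candidate solutions from an LLM.
    return [random.randint(0, 20) for _ in range(n)]

def critique_and_revise(plan):
    # Stand-in for the Author/Critic dialogue: nudge the plan in the
    # direction the critic's feedback implies (here, toward 10).
    return plan + (1 if plan < 10 else -1 if plan > 10 else 0)

def mind_evolution(generations=10, population=8, keep=4):
    pool = generate(population)
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)                   # evaluation
        survivors = pool[:keep]                                # selection
        revised = [critique_and_revise(p) for p in survivors]  # refinement
        pool = survivors + revised + generate(population - keep)  # new variations
    return max(pool, key=fitness)

print(mind_evolution())  # converges to the highest-fitness plan, 10
```

Even with this caricature of a fitness function, the evaluate-select-refine loop reliably beats any single draw from `generate` — which is the core claim of the method.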
Mind Evolution in Action
DeepMind tested this approach on benchmarks like TravelPlanner and Natural Plan. Using it, Google's Gemini achieved a success rate of 95.2% on TravelPlanner, an outstanding improvement over the 5.6% baseline. With the more advanced Gemini Pro, success rates increased to nearly 99.9%. This leap in performance shows the effectiveness of Mind Evolution on practical challenges.
Interestingly, the model's effectiveness grows with task complexity. For instance, while single-pass methods struggled with multi-day itineraries involving multiple cities, Mind Evolution consistently outperformed them, maintaining high success rates even as the number of constraints increased.
Challenges and Future Directions
Despite its success, Mind Evolution is not without limitations. The approach requires significant computational resources due to the iterative evaluation and refinement processes. For example, solving a TravelPlanner task with Mind Evolution consumed three million tokens and 167 API calls—substantially more than conventional methods. However, the approach remains more efficient than brute-force strategies like exhaustive search.
Additionally, designing effective fitness functions for some tasks remains challenging. Future research may focus on optimizing computational efficiency and extending the technique to a broader range of problems, such as creative writing or complex decision-making.
Another interesting area for exploration is the integration of domain-specific evaluators. For instance, in medical diagnosis, incorporating expert knowledge into the fitness function could further enhance the model’s accuracy and reliability.
Applications Beyond Planning
Although Mind Evolution has mainly been evaluated on planning tasks, it could be applied to various domains, including creative writing, scientific discovery, and even code generation. For instance, researchers have introduced a benchmark called StegPoet, which challenges the model to encode hidden messages within poems. Although this task remains difficult, Mind Evolution outperforms traditional methods, achieving success rates of up to 79.2%.
The ability to adapt and evolve solutions in natural language opens new possibilities for tackling problems that are difficult to formalize, such as improving workflows or generating innovative product designs. By employing the power of evolutionary algorithms, Mind Evolution provides a flexible and scalable framework for enhancing the problem-solving capabilities of LLMs.
The Bottom Line
DeepMind’s Mind Evolution introduces a practical and effective way to overcome key limitations in LLMs. By using iterative refinement inspired by natural selection, it enhances the ability of these models to handle complex, multi-step tasks that require structured reasoning and planning. The approach has already shown significant success in challenging scenarios like travel planning and demonstrates promise across diverse domains, including creative writing, scientific research, and code generation. While challenges like high computational costs and the need for well-designed fitness functions remain, the approach provides a scalable framework for improving AI capabilities. Mind Evolution sets the stage for more powerful AI systems capable of reasoning and planning to solve real-world challenges.
0 notes
ismailfazil1-blog · 10 months ago
Text
The Human Brain vs. Supercomputers: The Ultimate Comparison
Are Supercomputers Smarter Than the Human Brain?
This article delves into the intricacies of this comparison, examining the capabilities, strengths, and limitations of both the human brain and supercomputers.
5 notes · View notes
Text
Nerd gojo x nerd reader! Headcanons
Nerd!Gojo x Nerd!You Headcanons
Part 2 ♡ ♡ ♡ ♡
♡ Gojo Satoru, the prodigy. The guy who solves complex math problems in his head like it’s a simple 2+2. If someone asks him how, he’ll just smirk and say, “Just run your mind faster.” As if that makes sense.
♡ Gojo, the last-minute genius. He does his assignments at the last possible second but still gets a perfect score. People have accused him of using black magic. He doesn’t deny it.
♡ Gojo, the overanalyzer. Someone calls him a know-it-all as a joke, and next thing they know, they’re stuck listening to a 30-minute breakdown of why intelligence is subjective and how human perception affects knowledge.
♡ Gojo, the human stopwatch. He calculates the exact time people take to do the most random things:
Shoko takes exactly 3.2 seconds to process a joke before laughing.
Suguru sniffs his food for 2.6 seconds before deciding if it’s poisoned.
His teacher blinks an average of 18 times per minute when lecturing.
♡ Gojo, the walking encyclopedia. He acts like he knows everything: psychology, physics, chemistry, math. Whether he actually does or not is debatable, but he’ll never admit he’s wrong.
♡ Gojo, the fact machine. He drops random trivia constantly, just to flex. “Did you know honey never spoils?” “Gojo, no one cares.”
♡ Gojo, the exam escape artist. He drags Suguru out to do something totally unproductive before exams, but somehow still tops the class while Suguru barely passes. Suguru has stopped questioning it.
♡ Gojo, the romance skeptic. Laughs in the face of love at first sight, listing the exact probability of it happening.
♡ Gojo, the worst date ever. He once explained The Art of War on a date. The girl left before dessert. He still doesn’t know why.
♡ Gojo, the secret romance reader. He totally didn’t get caught reading a romance novel in the library. And he totally didn’t like it.
Then, there’s you.
♡ You, the transfer student. No expression. No reaction. The class went dead silent when you walked in, as if even breathing would be too loud. The teacher praised you, and you just nodded like it didn’t matter.
♡ You, Gojo’s accidental rival. Sitting next to him was a nightmare. He asked the most stupid questions, and you ignored all of them. He assumed you were just an edgy wannabe. That made him laugh.
♡ You, the real threat. When exam results came out, Gojo was shook. For the first time, he wasn’t the top scorer. You were. And your reaction? A shrug. No smile, no satisfaction. That’s when you became interesting.
♡ Gojo, the forced study partner. He forced the teacher to make you his partner. You weren’t amused.
“Why do I need to do practicals if I already know the answer?” you questioned.
“To see if it’s true or not, dummy.” He grinned, waiting for your response.
“If it’s in the book, it’s already true.” He had never wanted to strangle someone and marry them at the same time before.
♡ Gojo, the doomed fool. No one ever entertained his nerdy ramblings, but you? You matched his energy. When you started debating him on his own topics, he knew he was done for.
♡ Gojo, the AI skeptic. He swears you talk like a robot.
“That’s not an effective method.”
“This is scientifically incorrect.”
“Are you a government experiment?”
♡ Gojo, the challenge seeker. He constantly challenged you to competitions. You refused every time. “Not interested in unnecessary drama.” That hurt his soul.
♡ Gojo, the frustrated observer. He needed to see a crack in your facade. Anything. He studied your every move, trying to prove you weren’t an AI.
♡ Gojo, the mimic. He caught you muttering the pi table to regain focus. He immediately adopted the technique.
♡ Gojo, the sore winner. If he scored higher than you, he wasn’t happy; he was annoyed. What’s the point if you don’t even care?
♡ Gojo, the reluctant believer. He told you about his hobbies with way too much excitement. You told him about yours, but your blank expression made him question if you were lying.
♡ Gojo, the paranoid calculator. He tried analyzing your movements, but everything about you was too precise. It freaked him out.
♡ Gojo, the not-so-subtle spy. Since you lived next to Suguru, he used that as an excuse to observe you. Every time he saw you, you were either studying or staring out the window like a lifeless statue. You caught him multiple times. Instead of yelling, you just stared at him. It was terrifying.
♡ Gojo, the insecure nerd. He nervously brought up Dungeons & Dragons, expecting you to be clueless. Instead, you knew everything. He had never felt average before.
♡ Gojo, the desk menace. He constantly poked you during class, hoping for any reaction. You just stared at him, unblinking, until he became flustered and left.
♡ Gojo, the insane conversationalist. He told you the wildest theories, and you listened like it was just another casual conversation. It drove him insane.
It took me 4 days to think of a gojo nerd scenario 😭
And you GUYS HAVE TO REQUEST DO IT
Part 2 will be here
@naomigojo
824 notes · View notes
kenyatta · 2 months ago
Text
New techniques for probing large language models—part of a growing field known as “mechanistic interpretability”—show researchers the way these AIs do mathematics, learn to play games or navigate through environments. In a series of recent essays, Mitchell argued that a growing body of work shows that it seems possible models develop gigantic “bags of heuristics,” rather than create more efficient mental models of situations and then reasoning through the tasks at hand. (“Heuristic” is a fancy word for a problem-solving shortcut.)
When Keyon Vafa, an AI researcher at Harvard University, first heard the “bag of heuristics” theory, “I feel like it unlocked something for me,” he says. “This is exactly the thing that we’re trying to describe.”
- We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All - WSJ
143 notes · View notes
canmom · 5 months ago
Text
using LLMs to control a game character's dialogue seems an obvious use for the technology. and indeed people have tried, for example nVidia made a demo where the player interacts with AI-voiced NPCs:
youtube
this looks bad, right? like idk about you but I am not raring to play a game with LLM bots instead of human-scripted characters. they don't seem to have anything interesting to say that a normal NPC wouldn't, and the acting is super wooden.
so, the attempts to do this so far that I've seen have some pretty obvious faults:
relying on external API calls to process the data (expensive!)
presumably relying on generic 'you are xyz' prompt engineering to try to get a model to respond 'in character', resulting in bland, flavourless output
limited connection between game state and model state (you would need to translate the relevant game state into a text prompt)
responding to freeform input, models may not be very good at staying 'in character', with the default 'chatbot' persona emerging unexpectedly. or they might just make uncreative choices in general.
AI voice generation, while it's moved very fast in the last couple years, is still very poor at 'acting', producing very flat, emotionless performances, or uncanny mismatches of tone, inflection, etc.
although the model may generate contextually appropriate dialogue, it is difficult to link that back to the behaviour of characters in game
so how could we do better?
the first one could be solved by running LLMs locally on the user's hardware. that has some obvious drawbacks: running on the user's GPU means the LLM is competing with the game's graphics, meaning both must be more limited. ideally you would spread the LLM processing over multiple frames, but you still are limited by available VRAM, which is contested by the game's texture data and so on, and LLMs are very thirsty for VRAM. still, imo this is way more promising than having to talk to the internet and pay for compute time to get your NPC's dialogue lmao
second one might be improved by using a tool like control vectors to more granularly and consistently shape the tone of the output. I heard about this technique today (thanks @cherrvak)
third one is an interesting challenge - but perhaps a control-vector approach could also be relevant here? if you could figure out how a description of some relevant piece of game state affects the processing of the model, you could then apply that as a control vector when generating output. so the bridge between the game state and the LLM would be a set of weights for control vectors that are applied during generation.
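a rough sketch of what that bridge could look like (all toy numbers here — real control vectors are extracted from transformer-layer activations, e.g. with the repeng library, not from 8-dimensional gaussians):

```python
import numpy as np

# Toy sketch of the control-vector idea: a "direction" in hidden-state
# space is extracted from contrasting examples, then added during
# generation to push outputs toward one style. Real implementations
# hook into transformer layers; here the "hidden states" are tiny vectors.
rng = np.random.default_rng(42)

# Pretend activations recorded while the model reads "angry" vs "calm"
# text (in practice: activations from a chosen layer of the LLM).
angry_states = rng.normal(loc=+1.0, size=(50, 8))
calm_states = rng.normal(loc=-1.0, size=(50, 8))

# Simplest possible extraction: difference of means.
control_vector = angry_states.mean(axis=0) - calm_states.mean(axis=0)

def steer(hidden_state, strength):
    # Applied at generation time; the game state would set `strength`,
    # e.g. higher when the NPC has just been insulted.
    return hidden_state + strength * control_vector

h = rng.normal(size=8)
steered = steer(h, strength=0.5)
# The steered state points further along the "angry" direction:
print(control_vector @ steered > control_vector @ h)  # True
```

the appeal for games is that `strength` is just a number, so game state maps onto the model continuously instead of through brittle prompt text.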
this one is probably something where finetuning the model, and using control vectors to maintain a consistent 'pressure' to act a certain way even as the context window gets longer, could help a lot.
probably the vocal performance problem will improve in the next generation of voice generators, I'm certainly not solving it. a purely text-based game would avoid the problem entirely of course.
this one is tricky. perhaps the model could be taught to generate a description of a plan or intention, but linking that back to commands to perform by traditional agentic game 'AI' is not trivial. ideally, if there are various high-level commands that a game character might want to perform (like 'navigate to a specific location' or 'target an enemy') that are usually selected using some other kind of algorithm like weighted utilities, you could train the model to generate tokens that correspond to those actions and then feed them back in to the 'bot' side? I'm sure people have tried this kind of thing in robotics. you could just have the LLM stuff go 'one way', and rely on traditional game AI for everything besides dialogue, but it would be interesting to complete that feedback loop.
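one hedged sketch of closing that feedback loop: the model emits action tokens inline with its dialogue, and a thin parser routes them to the game's existing command system (the token format and command names below are invented for illustration, not any engine's API):

```python
import re

# Invented high-level commands the traditional game AI already knows how
# to execute; the LLM only has to emit the matching tokens.
COMMANDS = {
    "navigate": lambda target: f"pathfind to {target}",
    "target": lambda enemy: f"set attack target: {enemy}",
}

def run_npc_output(llm_output):
    # Split the model's output into dialogue and <act:...> tokens,
    # keeping the tokens via the capturing group in re.split.
    dialogue_parts, actions = [], []
    for piece in re.split(r"(<act:\w+ [^>]+>)", llm_output):
        m = re.match(r"<act:(\w+) (.+)>", piece)
        if m and m.group(1) in COMMANDS:
            actions.append(COMMANDS[m.group(1)](m.group(2)))
        elif piece.strip():
            dialogue_parts.append(piece.strip())
    return " ".join(dialogue_parts), actions

text, acts = run_npc_output(
    "Stay close. <act:navigate old mill> We'll cut through here."
)
print(text)  # Stay close. We'll cut through here.
print(acts)  # ['pathfind to old mill']
```

the hard part the sketch skips is training the model to emit those tokens sensibly in the first place — but the routing side really is this thin.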
I doubt I'll be using this anytime soon (models are just too demanding to run on anything but a high-end PC, which is too niche, and I'll need to spend time playing with these models to determine if these ideas are even feasible), but maybe something to come back to in the future. first step is to figure out how to drive the control-vector thing locally.
48 notes · View notes
cleolinda · 2 months ago
Text
Weekend links, May 4, 2025
My posts
I am still struggling with the fifth Silent Hill 2 commentary; the video I recorded last week (4/30) also isn't usable. Like, maybe I'll post it on Patreon as an extra at some point, but it is not the level of excellence we strive for here at Cleolinda Industries. Within hours of that, April's parting shot was to knock me down with another head cold, BUT, now that I have escaped its grip (April's, not the cold's. I'm still sick), I may have solved my OBS problems. I DON'T KNOW. WE LIVE IN HOPE. 
Meanwhile, Ian's off in fuckin' Brookhaven Hospital. IT'S. FINE. (I'm in the chat, we figure out some good stuff about the lore, it's a good time.) Also, at the top of his stream, he had sound engineer Andy Sudol on to talk about the differences between the 2001 and 2024 soundtracks. 
Signal boost while we're talking about games: I'm doing really well on light combat in SH2, except for when my neuropathy acts up and my fingers just decide they don't want to participate anymore (it's a good bet this has happened if I start screaming "JAMES WHAT ARE YOU DOING??"), so these mods and resources for disabled gamers caught my eye.  
Reblogs of interest
@mamoru looking out for us on the food safety front
Y'all, I don't know what's going on with Pinterest, but don't breathe too hard right now. An update from Reddit: More news outlets are reporting the sudden mass ban wave these last two weeks
My personal question: how does it actually BENEFIT companies to make their product unusable, though? I understand the answer, and yet, as a person who can think more than five seconds ahead into the future, I completely do not understand the answer.  
This question was also partly inspired by Polygon getting sold/gutted, in the sense of this Reddit reply.
Oh, I wasn't even thinking of Duolingo asserting itself as an "AI-first" company even as people complain that the quality of the app has plunged, so fuck them too I guess
PSA about some scam call techniques
I had to tell my therapist that I was facetiously done with life and everything in it, so I get this post
Good (and cute) news: "you can sponsor your own big beautiful TB- or landmine-detecting rat through APOPO HeroRATS"; "First-of-its-kind lab breeds bumblebee babies to save species from extinction"
Zines: I Am Not Your Asian American Doll
Speaking of Silent Hill 2: "this is how tag searches feel"
"askjeeves how to smuggle 30 naked prisoners (assorted genders) out of vampire mansion time sensitive."
"no, you’re thinking of fusion and fission. Bisexuals result in two nuclei that are identical to the original nucleus. Pansexuals result in four nuclei with half the number of chromosomes of the original cell"
"oh to be the black blob of a cat in vanessa stockard's paintings"
In tough times, there is one thing thou must always remember
All of these are horses
"Goblin learns they have a racist sword": some fantasy ideas
Flip the Frog gets restored
I'm particularly amused by these Vanillary reviews because I have it as a solid perfume and it's fine. 
I agree with all of these expletive/accent pairings.
A feline boo ghost to go with last week's ghost dog photoshoot
"The tribes of Tumblr appeared to worship Apollo as their primary patron deity, most often under the epithet Apollo Spairahemon ('Apollo the Ball-Thrower')"
Video
Wet beast Wednesday: "MOVE IDIOT"
Blumineck has a new approach to the three-arrow trick shot
"i know this is a predator. like a hardened killing machine. tempered by hundreds of years of evolutionary prowess to fine tune him into a living weapon. but"
Finsync
Good guy who talks like a bad guy
I honestly was not prepared for anything in this anecdote about buying a printer
The sacred texts
The iconic "girl… what were YOU doing at the devils sacrament 👀"
Personal tags of the week
I will be adding to "with mama" as often as possible. (You know what? I also need to add to dragons.)
18 notes · View notes
do-you-have-a-flag · 23 days ago
Text
putting aside the ethics of 'A.I' videos in their creation/usage/waste/economics, just on a purely technical level one thing i find interesting is no matter if the result looks photorealistic or like 3d CGI- it's all technically 2d image generation.
unless specifically used as an add on in a software for 3d rendering, of course, pretty much every ai video you see online is 2d art. the space rendered is a single plane, think of it like doing a digital painting on a single layer. the depth/perspective is an illusion that is frame by frame being rendered to the best ability of prediction based on data it has been fed.
obviously videos of 3d models in animation are 2d files. like a pixar movie. but in video games you do have a fully rendered 3d character in a 3d rendered space, that's why glitches that clip through environments are so funny. it's efficient to have stock animations and interaction conditions programmed onto rigged dolls and sets.
by contrast if you were to use a generative ai in a similar context it would be real time animating a series of illustrations. of sounds and scenarios. the complexity required for narrative consistency and the human desire to fuck up restrictions hits up against a much more randomised set of programming. how would it deal with continuity of setting and personality? obviously chatbots already exist but as the fortnite darth vader debacle recently shows there are limits to slapping a skin on a stock chatbot rather than building one custom.
i just think that there's so many problems that come from trying to make an everything generator that don't exist in the mediums it is trying to usurp because those mediums have a built in problem solving process that is inherent to the tools and techniques that make them up.
but also, very funny to see algorithmic 2D pixel generation being slapped with every label "this photo, this video, this 3d render" when at best it's cgi by description. let's call it what it is.
but i could of course be wrong in my understanding of this technology, so feel free to correct me if you have better info, but my basic understanding of this tech is: binary code organised by -> human programming code to create -> computer software code that -> intakes information from data sets to output -> pixels and audio waveforms
9 notes · View notes
stuffforthestash · 1 year ago
Text
Modern Academic AU pt2
Originally started because Professor Raphael got stuck in my head and I had (foolishly) hoped if I wrote down some thoughts, that would be the end of it 🫠
Part 1 and Part 3

------------------------------

Minthara - School of Law. Used to be a high profile defense lawyer but was barred from practice under questionable circumstances, so now she teaches courses on criminal procedure and domestic violence litigation. Male students are actively warned against taking any of her classes.

Elminster - Liberal Arts Dean. Has been in the position forever and is something of a legend at this point. He's Gale's mentor and long time family friend, and he delights in showing up unannounced to Prof. Dekarios's lectures. The two of them have a longstanding tradition of leaving surprise pranks in each other's offices.

Rolan - English department. Newly upgraded from adjunct instructor to junior full time staff, he's been assigned the special hell of having to teach the general ed. introductory writing courses that none of the other faculty want to deal with. He hates it and thinks it's a complete waste of his talents, but is determined to stick through it long enough to get that research grant.

Alfira - School of Theater & Music. Teaches vocal technique and musicality at every level. She's also the faculty coordinator for multiple on-campus performance groups, directs the university chorale and composes all their arrangements, is herself in a local acapella group, AND does community arts & outreach programs for kids.

Gortash - Newly appointed Dean of Information Studies. He's brilliant, he talks big about new frontiers in infosec and grand designs in the future potential of AI... and is already under investigation by the ethics board for misappropriation of university funds.

Ketheric - VP of Alumni and he's been with the university longer than Elminster. Nobody knows why he hasn't just retired yet, despite how much he seems to hate his job.

Orin - School of Fine Art. She "teaches" a course on performative art. It's weird and extremely uncomfortable for everyone involved, but for some reason people keep enrolling.

Durge - Fine Art Dep't Chair. The deeply disturbing nature of his personal art aside, he's actually good at his job as both the chair and an instructor. Mostly teaches anatomy and live model studio courses.

Ulder - VP of Public Affairs. He's a great public face for the university, everybody loves him... except the son he refuses to acknowledge after a falling out years ago.

Mizora - Human resources admin. Loves her job because it gives her power over other people. Is more likely to be the source of an HR complaint than the one who actually solves the problem.

Thaniel (as requested!) - Also HR. He's the one you hope gets assigned to whatever you need because he's great at it. Is also the only one who can reliably get in touch with Halsin; it's not well known that he can, so he'll usually agree to help those who figure out to ask him.
------------------------------
This started going long, so it looks like I'll be doing a third (and probably final?) installment to cover Dammon, Zevlor, Wulbren, Aylin & Isobel, and any other requests!
56 notes · View notes
cetaceanhandiwork · 2 years ago
Text
the conversation around generative neural networks is a dumpster fire in a dozen different ways but I think the part that disproportionately frustrates me, like on an irrational pet peeve level, is that nobody in that conversation seems to understand automata theory
back before most of these deep learning techniques were a twinkle in a theorist's eye, back when computing was a lot less engineering and a lot more math, computer scientists had worked out the math of different "classes" of computer system and what kinds of problems they could and couldn't solve
these aren't arbitrary classifications like most taxonomy turns out to be. there's qualitative differences. you can draw hard lines: "it takes class X or above to run programs with Y trait", and "only class X programs or below are guaranteed to have Y trait". and all of those lines have been mathematically proven; if you ever found a counterexample, then we'd be in "math is a lot of bunk" territory and we'd have way bigger things to worry about
this has nothing to do with how fast/slow the computer system goes; it's about "what kinds of program can it run at all". so it includes emulation and such. you can emulate a lower system in a higher one, but not vice versa
at the top of this heap is turing machines, which includes most computers we'd bother to build. there's a lot of programs that it's been mathematically proven require at least a turing machine to run. and this class of programs includes a lot of things that humans can do, too
but with this power comes some inevitable restrictions. for example, if you feed a program to a turing machine, there's no way to guarantee that the program will finish; it might get stuck somewhere and loop forever. in fact there's some programs that you straight up can't predict whether they'll ever finish even if you're looking at the code yourself
these two are intrinsically linked. if your program solves a turing complete problem, it needs a turing machine; nothing less will do. and a turing machine is capable of running all such programs, given enough time.
ok. great. what does any of that jargon have to do with AI?
well... the important thing to know is that the machine learning models we're using right now can't loop forever. if they could loop forever they couldn't be trained. for any given input, they'll produce an output in finite time
which means... well, any program that requires a turing machine to run, or even requires a push-down automaton to run (a weaker type of computer system that can get into infinite loops but that you can at least check ahead of time if a program will get stuck or not), can't be emulated by these systems. they've got to be in the next category down: finite state machines at most - and thus unable to compute, or emulate computation of, programs that inhabit a higher tier
and there is a heck of a lot of stuff we conceptualize as "thinking" that doesn't fit in a finite state machine
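to make that concrete, here's a toy illustration (mine, not from any textbook): checking balanced parentheses needs an unbounded counter, so any machine with a fixed number of states starts confusing inputs once nesting exceeds its state budget:

```python
# Toy demo of the hard line between machine classes: a finite-state
# machine with k states cannot track nesting depth beyond k, so it must
# eventually accept strings a true (unbounded-memory) checker rejects.

def fsm_check(s, max_states):
    # A finite-state machine: its "memory" of depth saturates at
    # max_states, collapsing all deeper nesting into one state.
    state = 0
    for ch in s:
        if ch == "(":
            state = min(state + 1, max_states)  # memory runs out here
        elif ch == ")":
            state = max(state - 1, 0)
    return state == 0

def true_check(s):
    # An unbounded counter (a stack in disguise) — more than an FSM.
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

deep = "(" * 5 + ")" * 5        # balanced, depth 5
unbalanced = "(" * 5 + ")" * 4  # missing one close

print(true_check(deep), true_check(unbalanced))          # True False
print(fsm_check(deep, 3), fsm_check(unbalanced, 3))      # True True — can't tell them apart
```

the 3-state machine wrongly accepts the unbalanced string, and no finite state count fixes this in general: for any k you can write a depth-(k+1) input that fools it.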
...I suspect it will some day be possible for a computer program to be a person. I am absolutely certain that when that day comes, the computer program who's a person would require at least a turing machine to run them
what we have right now isn't that. what we have right now is eye spots on moths, bee orchids, mockingbirds. it might be "artificial intelligence", depending on your definition of "intelligence", but prompt it to do things that we've proven only a turing machine can do, and it will fall over
and the reason I consider this an "irrational pet peeve" and not something more severe? is because this information doesn't actually help solve policy questions! if this is a tool, then we still need to decide how we're going to allow such tools to be built and used. it's not as simple as a blanket ban, and it's not as simple as letting the output of GNNs fully launder the input, because either of those "simple" solutions is ripe for abuse
but I can't help but feel like the conversation is in part held back by specious "is a GNN a people" arguments on the one hand, and "can a GNN actually replace writers, or is it just fooling execs into thinking it can" arguments on the other, when the answer to both seems to me like it was solved 40 years ago
228 notes · View notes
witchynek0 · 1 year ago
Text
why I think Ultraman: Rising is one of the better movies of the last few months
I watch A LOT of movies, and I'm not joking, I have a lot of free time. recently I have noticed that a lot of movies, especially animated ones, feel rushed. MAJOR SPOILER ALERT. Ultraman: Rising, although a long movie, takes its time and feels really slow in the best way. you get to meet Ken Sato, a cocky baseball player, but also: Ultraman! he accidentally picks up a kaiju baby and is asked to raise it till the baby can go back to its native land, all while fighting the KDF (the Kaiju Defense Force).
Ken Sato is a cocky baseball player playing for 'the Giants', with no family and no friends. he is confident and full of himself, and sure he will make the Giants great again, all on his own. he doesn't care about his team, and he doesn't care about his reputation. the night of his game he has to deal with a kaiju attack, where he saves the baby kaiju and takes her to his base, keeping her safe from the KDF, who want to use the baby to find Kaiju Island and kill all kaiju
i've watched this movie on repeat since it came out. the buildup of the story and the small details and secrets that get revealed as it progresses keep you guessing and grasping for more. the movie lets you form an attachment to the characters, and doesn't rush you by telling you all the 'lore' as quickly as possible and then moving on to the problem and how they solve it.
Ami seems to play a key role in Ken's development; the 'off the record chat' seems to open his mind. he asks her how she juggles her job and her kid, since he is having a hard time with it. Ami says "...they are trying to discover who they are, and what they want. and the only support they have; is us. imperfect messed up us dealing with our own issues. trying to figure out who we are…"
this moves Ken to free the baby kaiju and teach her baseball. through trial and error, the baby kaiju, later named Emi, and Ken explore their bond, safely in the base. the next day, when Ken is on a promised follow-up interview with Ami, Emi gets out of the base, follows a blimp with one of Sato's commercials on it, and climbs a tower. Ken saves her from falling but breaks his arm in the process and has to ask his dad for help. all of this happens in roughly an hour. see how much that is? how much I've written down about Ken Sato and how you got to know him? now this is not all that is shown in the movie, so do watch it if you haven't, because this is just the tip of the iceberg. and there is another hour left.
and in that hour, Emi changes and learns a lot, and Ken's dad gets more involved. I haven't even mentioned the AI robot that helps Ken during the first half of the movie, or the KDF and their underlying motive. this movie offers so much compared to others I've watched. let's take, for example, Disney's Wish (don't come at me Disney, I do still love you). it was fast, I barely got the plot the first time around, and I felt rushed and hurried. I've watched Wish 4 times and I still don't really get it, and feel like I haven't seen all of the movie. and not just Disney, but a lot of 'BIG' companies lately have made rushed movies that weren't thought through. the story isn't rich and flavourful pho, but more like a diluted murky beef broth: still good, but less satisfying. the animated movie industry has been through some hardship lately: new techniques, and mostly the animators, who work hard for little pay, under a lot of stress to make good content because our attention spans don't last and the internet is fast-paced. I've made animations by hand, and it's so hard to get them looking good. I'm here to tell you, Ultraman: Rising is the movie I'd show people when they ask me "what do you want to do as an animator?" and I say "make something as good as this."
I'd also like to make it clear that yes, there are good if not better OTHER animated movies, but this one is recent and is the best example at the moment.
32 notes · View notes
longlistshort · 1 month ago
Text
“Radiating Kindness (Oil)”, 2023, Oil on linen
“Bold Glamour”, 2023, Digital print on linen
For AI Paintings, his 2023 exhibition at The Hole’s Lower East Side location, Matthew Stone explored new ways of using the latest technology while expanding on techniques used in his previous digital creations.
Details from The Hole about this exhibition-
Two LED screens form the center of this show, displaying an unedited stream of novel AI outputs; a new painting every ten seconds. Corresponding in scale to the surrounding works on linen and functioning like smart canvases, these AI paintings transform endlessly and if you’re alone in the gallery, you will be the only person to ever see that version of the artwork.
Stone’s AI paintings—both the tangible on linen and the fleeting screenic pieces—are created through his training of a custom AI model on top of Stable Diffusion’s open source, deep learning, text-to-image model. By feeding it only his past artworks, Stone has created a self-reflexive new series of AI works that disintegrates the hegemony of the singular static masterpiece and problematizes the idea of ownership, or even what “the artwork” itself entails.
AI has become part of contemporary culture, used to solve real world problems and also create TikTok filters. It’s a tool and like a paintbrush it can be used skillfully or not. At the moment AI is throwing the art world into upheaval as artists explore its potential, galleries contend with its disruption of technique and presentation and collectors and museums feel the dissolution of authorship and ownership.
A second type of work makes its debut here, Radiating Kindness (Oil), a 3D printed, machine-assisted oil painting made in collaboration with ARTMATR labs in Red Hook, where MIT artists and engineers have come together to make innovative tools and tech. By leveraging AI, robotics, computer vision and painting scripts, their robot has created a traditional oil painting in three dimensions. You can see on the surface how the interplay between analog and digital mark making is eye-boggling.
The show also includes examples of Stone’s “traditional” technique, which is anything but: on the 13-foot wide linen painting, Irradiance, four nude figures dance over piles of strewn AI paintings. The figures in the foreground, reminiscent in choreography of Henri Matisse’s Dance (La Danse), 1910, are bodacious, athletic women, heavy and sexy like a Michelangelo marble while at the same time futuristic, weightless and splendid in impossible glass and metallic brush marks. Here Stone’s circular and sensitive approach is laid bare for the viewer; the references to art history, technology, culture, and access, and the pursuit of the intangible, are almost overwhelming to grasp.
Stone’s approach points to the deeply interwoven nature of our offline and online lives today. He sees artists’ use of new technologies as necessary, with creatives deploying these tools in a manner that’s not motivated by big tech or financial gains, disrupting the algorithm by creating their own and exploring this new frontier without data-driven deliverables. Creating new context and room for human subjectivities and emotion in the shift from analog to digital that arguably has already occurred.
Below, in an interview for The Standard, he discusses using AI for this work further-
When working with AI, do you sometimes feel overwhelmed or do you always feel in control?
I have never felt fully in control while making art and I’ve always been back and forth between wanting to be and understanding the transformative and creative power of just letting go. The most exciting moments in my creative process have often been unexpected mistakes. Those happy mistakes have revealed something that can then be consciously amplified. Using AI creates lots of unexpected outcomes very fast. So as someone who likes accidents in this context of image making, it’s a good way to become accident-prone.
Do you consider AI as just another digital tool? Or does it feel more like a collaboration? In other words, do you sometimes feel AI might develop its own taste, point of view, conscience?
It’s a digital tool and I try to resist the urge to anthropomorphize it. But it’s difficult because it feels like such a paradigm shift and also sometimes like dreaming. I think that culturally speaking, we are moving in a direction that assigns these qualities of perceived sentience to AI even when more mundane actions are at play. It’s not clear to me how we will tell if AI has achieved general intelligence, but I think most people will assume it to be the case long before it actually happens, assuming that it does.
3 notes · View notes
anumberofhobbies · 18 days ago
Text
Fearsome tremors at high speed
At first glance, levitating a train above magnetic rails seems like a perfect way to escape the constraints of gravity. No contact, no friction, just near-perfect gliding. But in practice, even the tiniest imperfections can cause problems. Deformed bridges, slightly crooked coils, and the cab starts to vibrate at fearsome frequencies. Researchers at the Datong test site in Shanxi province dissected the problem in detail. They discovered that at speeds of 400 km/h and above, the cab was subjected to vibrations deemed “extremely unpleasant”, reaching a Sperling index of 4.2 at 600 km/h. This index, developed in the 1940s, measures passenger discomfort: at 4.2, the vibrations are so strong that they can be harmful to health on long journeys.
AI at the service of cushioning
The remedy found by these same researchers is a hybrid suspension system that combines conventional air springs with electromagnetic actuators. The real innovation here is the on-board AI. Using a “sky-hook” strategy, the algorithm simulates a fixed anchoring point to counter low-frequency jolts. The second technique uses a PID controller, an intimidating acronym for Proportional, Integral and Derivative. In short, the AI constantly adjusts the forces to correct the capsule’s trajectory. To fine-tune the settings, the engineers turned to the NSGA-II genetic algorithm, a bit like a gene selector searching for the most effective combination to keep the capsule stable.
Promising tests
The results speak for themselves. On a scale model, vertical vibration fell by 45.6%. The Sperling index fell below 2.5, a sign that comfort is now judged “pronounced but not unpleasant”. A clear improvement, even at 1,000 km/h.
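As a sketch of the PID idea the article names (a generic textbook controller with made-up gains and a toy one-mass "cab", not the researchers' actual suspension system):

```python
class PID:
    """Textbook proportional-integral-derivative controller (illustration only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        if self.prev_error is None:
            derivative = 0.0  # avoid a derivative kick on the first sample
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# toy "cab": a unit mass we want held at displacement 0
pid = PID(kp=2.0, ki=0.5, kd=1.0, dt=0.01)
pos, vel = 1.0, 0.0                # start displaced by one unit
for _ in range(2000):              # simulate 20 seconds
    force = pid.update(0.0, pos)   # controller pushes back toward 0
    vel += force * pid.dt          # semi-implicit Euler integration
    pos += vel * pid.dt
# after 20 simulated seconds the displacement has decayed close to zero
```

The proportional term reacts to the current offset, the integral term removes steady drift, and the derivative term damps the oscillation, which is the part doing the "cushioning".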
2 notes · View notes
morlock-holmes · 1 year ago
Text
Against AGI
I don't like the term "AGI" (Short for "Artificial General Intelligence").
Essentially, I think it functions to obscure the meaning of "intelligence", and that arguments about AGIs, alignment, and AI risk involve using several subtly different definitions of the term "intelligence" depending on which part of the argument we're talking about.
I'm going to use this explanation by @fipindustries as my example, and I am going to argue with it vigorously, because I think it is an extremely typical example of the way AI risk is discussed:
In that essay (originally a script for a YouTube Video) @fipindustries (Who in turn was quoting a discord(?) user named Julia) defines intelligence as "The ability to take directed actions in response to stimulus, or to solve problems, in pursuit of an end goal"
Now, already that is two definitions. The ability to solve problems in pursuit of an end goal almost certainly requires the ability to take directed actions in response to stimulus, but something can also take directed actions in response to stimulus without an end goal and without solving problems.
So, let's take that quote to be saying that intelligence can be defined as "The ability to solve problems in pursuit of an end goal"
Later, @fipindustries says, "The way Im going to be using intelligence in this video is basically 'how capable you are to do many different things successfully'"
In other words, as I understand it, the more separate domains in which you are capable of solving problems successfully in pursuit of an end goal, the more intelligent you are.
Therefore Donald Trump and Elon Musk are two of the most intelligent entities currently known to exist. After all, throwing money and subordinates at a problem allows you to solve almost any problem; therefore, in the current context the richer you are the more intelligent you are, because intelligence is simply a measure of your ability to successfully pursue goals in numerous domains.
This should have a radical impact on our pedagogical techniques.
This is already where the slipperiness starts to slide in. @fipindustries also often talks as though intelligence has some *other* meaning:
"we have established how having more intelligence increases your agency."
Let us substitute the definition of "intelligence" given above:
"we have established how the ability to solve problems in pursuit of an end goal increases your agency"
Or perhaps,
"We have established how being capable of doing many different things successfully increases your agency"
Does that need to be established? It seems like "Doing things successfully" might literally be the definition of "agency", and if it isn't, it doesn't seem like many people would say, "agency has nothing to do with successfully solving problems, that's ridiculous!"
Much later:
''And you may say well now, intelligence is fine and all but there are limits to what you can accomplish with raw intelligence, even if you are supposedly smarter than a human surely you wouldn’t be capable of just taking over the world uninmpeeded, intelligence is not this end all be all superpower."
Again, let us substitute the given definition of intelligence;
"And you may say well now, being capable of doing many things successfully is fine and all but there are limits to what you can accomplish with the ability to do things successfully, even if you are supposedly much more capable of doing things successfully than a human surely you wouldn’t be capable of just taking over the world uninmpeeded, the ability to do many things successfully is not this end all be all superpower."
This is... a very strange argument, presented as though it were an obvious objection. If we use the explicitly given definition of intelligence the whole paragraph boils down to,
"Come on, you need more than just the ability to succeed at tasks if you want to succeed at tasks!"
Yet @fipindustries takes it as not just a serious argument, but an obvious one that sensible people would tend to gravitate towards.
What this reveals, I think, is that "intelligence" here has an *implicit* definition which is not given directly anywhere in that post, but a number of the arguments in that post rely on said implicit definition.
Here's an analogy; it's as though I said that "having strong muscles" is "the ability to lift heavy weights off the ground"; this would mean that, say, a 98lb weakling operating a crane has, by definition, stronger muscles than any weightlifter.
Strong muscles are not *defined* as the ability to lift heavy objects off the ground; they are a quality which allow you to be more efficient at lifting heavy objects off the ground with your body.
Intelligence is used the same way at several points in that talk; it is discussed not as "the ability to successfully solve tasks" but as a quality which increases your ability to solve tasks.
This I think is the only way to make sense of the paragraph, that intelligence is one of many qualities, all of which can be used to accomplish tasks.
Speaking colloquially, you know what I mean if I say, "Having more money doesn't make you more intelligent" but this is an oxymoron if we define intelligence as the ability to successfully accomplish tasks.
Rather, colloquially speaking we understand "intelligence" as a specific *quality* which can increase your ability to accomplish tasks, one of *many* such qualities.
Say we want to solve a math problem; we could reason about it ourselves, or pay a better mathematician to solve it, or perhaps we are very charismatic and we convince a mathematician to solve it.
If intelligence is defined as the ability to successfully solve the problem, then all of those strategies are examples of intelligence, but colloquially, we would really only refer to the first as demonstrating "intelligence".
So what is this mysterious quality that we call "intelligence"?
Well...
This is my thesis, I don't think people who talk about AI risk really define it rigorously at all.
For one thing, to go way back to the title of this monograph, I am not totally convinced that a "General Intelligence" exists at all in the known world.
Look at, say, Michael Jordan. Everybody agrees that he is an unmatched basketball player. His ability to successfully solve the problems of basketball, even in the face of extreme resistance from other intelligent beings is very well known.
Could he apply that exact same genius to, say, advancing set theory?
I would argue that the answer is no, because he couldn't even transfer that genius to baseball, which seems on the surface like a very closely related field!
It's not at all clear to me that living beings have some generalized capacity to solve tasks; instead, they seem to excel at some and struggle heavily with others.
What conclusions am I drawing?
Don't get me wrong, this is *not* an argument that AI risk cannot exist, or an argument that nobody should think about it.
If anything, it's a plea to start thinking more carefully about this stuff precisely because it is important.
So, my first conclusion is that, lacking a model for a "General Intelligence" any theorizing about an "Artificial General Intelligence" is necessarily incredibly speculative.
Second, the current state of pop theory on AI risks is essentially tautology. A dangerous AGI is defined as, essentially, "An AI which is capable of doing harmful things regardless of human interference." And the AI safety rhetoric is "In order to be safe, we should avoid giving a computer too much of whatever quality would render it unsafe."
This is essentially useless, the equivalent of saying, "We need to be careful not to build something that would create a black hole and crush all matter on Earth into a microscopic point."
I certainly agree with the sentiment! But in order for that to be useful you would have to have some idea of what kind of thing might create a black hole.
This is how I feel about AI risk. In order to talk about what it might take to have a safe AI, we need a far more concrete definition than "Some sort of machine with whatever quality renders a machine uncontrollable".
27 notes · View notes
spacetimewithstuartgary · 2 months ago
Text
AI finds new ways to observe the most extreme events in the universe
Extreme cosmic events such as colliding black holes or the explosions of stars can cause ripples in spacetime, so-called gravitational waves. Their discovery opened a new window into the universe. To observe them, ultra-precise detectors are required. Designing them remains a major scientific challenge for humans. Researchers at the Max Planck Institute for the Science of Light (MPL) have been working on how an artificial intelligence system could explore an unimaginably vast space of possible designs to find entirely new solutions. The results were recently published in the journal ›Physical Review X‹.
More than a century ago, Einstein theoretically predicted gravitational waves. They could only be directly detected in 2015 because the development of the necessary detectors was extremely complex. Dr. Mario Krenn, head of the research group ›Artificial Scientist Lab‹ at MPL, in collaboration with the team of LIGO (“Laser Interferometer Gravitational-Wave Observatory”), who successfully built those detectors, has designed an AI-based algorithm called ›Urania‹ to design novel interferometric gravitational wave detectors. Interferometry is a measurement method that uses the interference of waves, i.e. their superposition when they meet. Detector design requires optimizing both layout and parameters. The scientists converted this challenge into a continuous optimization problem and solved it using methods inspired by modern machine learning. They found many new experimental designs that outperform the best known next-generation detectors. These results have the potential to improve the range of detectable signals by more than an order of magnitude.
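The "continuous optimization" framing can be sketched generically (a toy with a made-up two-parameter score; the real detector objective is nothing like this): parameterize the design, score it, and keep any random perturbation that improves it.

```python
import random

def score(design):
    # stand-in "design score": smooth, peaked at arm=4.0, power=2.0
    # (invented numbers for illustration only)
    arm, power = design
    return -(arm - 4.0) ** 2 - (power - 2.0) ** 2

def hill_climb(start, steps=5000, sigma=0.2, seed=0):
    """Keep-if-better Gaussian search over continuous design parameters."""
    rng = random.Random(seed)
    best_design, best_score = start, score(start)
    for _ in range(steps):
        candidate = tuple(x + rng.gauss(0.0, sigma) for x in best_design)
        s = score(candidate)
        if s > best_score:           # keep only improvements
            best_design, best_score = candidate, s
    return best_design, best_score

design, value = hill_climb((1.0, 1.0))
# design ends up near the optimum (4.0, 2.0)
```

Real approaches would use gradients and far richer parameterizations, but the loop above is the essence of turning "design a detector" into "optimize a continuous function".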
Nonconformist and creative: that’s what Urania discovered
In the algorithm’s solutions, the researchers rediscovered numerous known techniques. ›Urania‹ also proposed unorthodox designs which could reshape our understanding of detector technology. “After roughly two years of developing and running our AI algorithms, we discovered dozens of new solutions that seem to be better than experimental blueprints by human scientists. We asked ourselves what humans overlooked in comparison to the machine,” says Krenn. The researchers expanded their scientific approach to understand the AI-discovered tricks, ideas, and techniques. Many of them are still completely alien to them. They have compiled 50 top-performing designs in a public ›Detector Zoo‹ and made them available to the scientific community for further research.
The recently published work shows that AI can uncover novel detector designs and inspire human researchers to explore new experimental and theoretical ideas. More broadly, it suggests that AI could play a major role in designing future tools for exploring the universe, from the smallest to the largest scales. “We are in an era where machines can discover new super-human solutions in science, and the task of humans is to understand what the machine has done. This will certainly become a very prominent part of the future of science,” says Krenn.
IMAGE: Illustration of the first gravitational wave event observed by LIGO. The detected wave forms from LIGO Hanford (orange) and LIGO Livingston (blue) are superimposed beneath illustrations of the merging black holes. Credit Aurore Simmonet (Sonoma State University), Courtesy Caltech/MIT/LIGO Laboratory
2 notes · View notes
the-real-wholesome-bitch · 8 months ago
Text
Mindless consumption and AI
Ok, so I am a computer science student and an artist, and quite frankly, I hate AI. I think it is just encouraging the mindless consumption of content rather than the creation of art and things that we enjoy. People are trying to replace human-created art with AI art, and quite frankly, that really is just a head-scratcher. The definition of art from Oxford Languages is as follows: “the expression or application of human creative skill and imagination, typically in a visual form such as painting or sculpture, producing works to be appreciated primarily for their beauty or emotional power.” The key phrase here is “human creative skill”; art is inherently a human trait. I think it is cool that we are trying to teach machines how to make art; however, can we really call it art based on the definition we see above? About two years ago, I wrote a piece for my school about AI and art (I might post it; who knows?), where I argued that AI art is not real art.
Now, what about code? As a computer science student, I kind of like AI in the sense that it can look over my code and tell me what is wrong with it, as well as how to improve it. It can also suggest sources for a research paper and check my spelling (which is really bad; I used it for this). Now, AI can also MAKE code, and let me tell you, my classmates abuse this like crazy. Teachers and TAs are working overtime to look through all the code that students submit to find AI-generated code (I was one of them), and I’ll be honest, it’s really easy to find!
People think that coding is a very rigid discipline, and yes, you do have to be analytical and logical to come up with code that works; however, you also have to be creative. You have to be creative to solve the problems that you are given, and just like with art, AI can’t be creative. Sure, it can solve simple tasks like making an array that takes in characters and reverses the order to print the output. But it can’t solve far more complex problems, and when students try to use it to find solutions, it breaks. The programs that it generates just don’t work and/or it makes up some nonsense.
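that simple exercise, sketched in Python (my illustration, not an actual student submission):

```python
def reverse_chars(text):
    # build an array of characters, then walk it backwards
    chars = list(text)
    out = []
    for i in range(len(chars) - 1, -1, -1):
        out.append(chars[i])
    return "".join(out)

print(reverse_chars("hello"))  # olleh
```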
And as more AI content fills the landscape, it’s just getting shittier and shittier. Now, how does the mindless consumption of content relate to this? You see, I personally think it has a lot to do with it. We have been consuming information and content for a long time, but the amount of content that exists in this world is greater than ever before, and AI “content” is just adding to this junkyard. No longer are people trying to understand the many techniques of art and develop their own styles (this applies to all art forms, such as visual art, writing, filmmaking, etc.). People will simply just enter a prompt into Midjourney and BOOM, you have multiple “art pieces” in different styles (which were stolen from real artists), and you can keep regenerating until you get what you want. You don’t have to do the hard work of learning how to draw, developing an art style, and doing it until you get it right. You can “create” something quickly for instant gratification; you can post it, and someone will look at it. Maybe they will leave a like on it; they might even keep scrolling and see more and more AI art, therefore leading to mindless consumption.
4 notes · View notes
areyouscaredyet · 3 months ago
Note
Love seeing your art on my dash, been following u for a while and i enjoy seeing your stuff from other fandoms too. Do you have any tips on picking colors?
hi anon! first of all, thank you so so much for this! i'm happy to know that there are people who don't mind my inconsistent posting habits lol. i have such a hard time staying still and the same can be said for my fandoms HAha for your question-- prepare for rambling!! as an educator, i simply can't refuse the opportunity to talk about art!! (you have been warned!!!)
i don't wanna make assumptions about your own art experiences, but i love to talk and for the sake of other people seeing this, i'm gonna rip the band-aid off RIGHT NOW.
the best thing you could ever do for your artistic practice is
LOOK AT ART ALL THE TIME.
it doesn't matter what era of art history, if it's contemporary, what medium it's in, none of that! just keep looking at art, you'll pick up on things naturally as you gain familiarity with different artists and experience in close-looking. you'll find what jives with you best based on your own aesthetic and thematic preferences.
(to talk about myself VERY briefly) i'm a painter-- most of my formal art training is in oil painting, so a lot of my influences for specific technique and color language come from painters. i bring this up so that you understand that what i'm saying is malleable and also so you trust what i'm gonna say! you don't have to look at the specific artists i'm gonna bring up, it's just so you can get a sense of thought process.
(read more for the best advice you'll ever hear)
REFERENCES ARE YOUR LIFE-BLOOD
nothing is original anymore, and it never was! not even in 1200 BCE or whenever else. when thinking about a piece, i try to consider as much as possible at the beginning, and then i start pulling references. people have been studying and experimenting with color since the dawn of time, why make things harder by forcing yourself to figure everything out on your own? having a repertoire of images to pull parts from can be extremely helpful and make things move much more smoothly.
my work is very influenced by historic christian art and more recently, a fuck ton of sci-fi shit. finding specific artists that make this type of work was important to me, because i had something i could concretely look into if i needed to solve a specific problem with a painting, or if i had a question that needed answering. for example: because of common trends throughout art history, certain lighting conditions and certain color choices already have certain associations! you don't need to memorize anything like that, but a quick google search (after you sort through the AI bullshit) can take you a long way!
off the top of my head, here are some really great artists to look at if you're interested in the kind of color fields that i explore!
Paolo Veronese
a 16th century italian painter (boring i know!! pls don't run away!!)
if you spend some time with his work you'll notice that his use of color feels very contemporary for someone born in 1528. a fun fact: green is notorious for how difficult it is to wrangle in a painting. this guy, however, was SO good at painting with green that they named a shade of green after him! it's aptly called Veronese Green!
Zdzisław Beksiński
"I wish to paint in such a manner as if I were photographing dreams" 
a must-know if you are interested in creepy sci-fi landscapes and eerie color palettes! he was a 20th century polish painter who worked in a lot of thin layers, building up these super wonderful and atmospheric pieces. he kept painting over and editing his works until he was satisfied- he had a bit of a weird attitude towards painting lol. he also had an extremely interesting life- i recommend everyone look into him just for the sake of hearing about the most tragic man ever
Sleeping By the Mississippi - Alec Soth
a lovely photo book from 2004 that uses vibrant color and funky composition to represent places visited by the artist on a series of road trips along, well, the Mississippi! to me, the book feels like a Nutcracker-esque exploration of the American Midwest, and does a great job of chronicling the connections formed while on his journey. everything is very quiet, but the use of color makes the world feel lived in and whimsical. each subject has their own soul. even if there's no person present in a photo, the energy of the image is palpable.
Reconciliation - Billie Mendel
an AMAZING study of light and color. this artist spent hours of her time in the confessional booths of churches taking long-exposure photographs. the delicate way the light reflects off of different surfaces, pierces through cracks and holes, and washes over walls makes the spaces feel inhabited by actual spirits. there's an eerie, yet comfortable, feeling when i look at these photos. it does an amazing job of visually portraying silence. this book was so important for me in helping with color palettes and lighting conditions!
i chose these examples to show that looking at artists who use different mediums can also be a great way of seeing how people approach things from different perspectives!
anyways, sorry for the rambling! this was a long and convoluted way of telling you to just take color palettes and lighting conditions from other artists hAHA-- but hopefully this was helpful to you! if you want to reach out to talk about art or finding artists to look at, I would be more than happy to! the beautiful thing about the world is that there are millions of makers that exist RIGHT NOW, and you can find them at the click of a button!
thanks again, anon! best of luck with your creative journey :)
2 notes · View notes